

How the AI Boom Sparked a Housing Crisis in One Texas City

TIME - Tech

One chilly day in November 2025, community worker Mike Prado drove through Abilene, Tex., handing out blankets, socks, and jackets to unhoused individuals across the city. People sat on curbs, alleyway after alleyway, their meager belongings soaked by the previous night's hard rain. Prado has worked in this community for a decade, and was once homeless in Abilene himself. Prado has witnessed difficult years--but the current situation was the worst he'd ever seen, he told TIME. One man with a walker approached Prado outside of the Hope Haven offices--an Abilene nonprofit where Prado works, which operates a shelter and helps people with vouchers find housing--and accepted a jacket from him.


AI Is Moving Beyond Chatbots. Claude Cowork Shows What Comes Next

TIME - Tech

The DNA file had been gathering dust on Pietro Schirano's computer for years. Then, earlier this month, he gave it to Claude Code--an "agentic coding tool" developed by Anthropic--for analysis. "I'm attaching my raw DNA file from Ancestry DNA," he told the tool. The AI spawned copies of itself on Schirano's computer, each one simulating an expert in a different part of the genome--one expert on cardiovascular disease, another on aging, a third on autoimmune disease.


AI Is Getting Better at Science. OpenAI Is Testing How Far It Can Go

TIME - Tech

Demis Hassabis founded DeepMind to "solve intelligence" and then use that to "solve everything else." Sam Altman promised that "the gains to quality of life from AI driving faster scientific progress will be enormous." Dario Amodei of Anthropic predicted that as soon as 2026, AI progress could produce a "country of geniuses in a data center." Of all the foundational myths driving the AI boom, the hope that AI might help humanity understand the universe is among the most enduring. FrontierScience, a new benchmark published Tuesday by OpenAI, suggests that AI models are advancing toward that goal--and highlights the difficulty of testing models' capabilities as they become ever more competitive with human scientists.


Inside the AI Village Where Top Chatbots Collaborate--and Compete

TIME - Tech

Pillay is an editorial fellow at TIME. My virtual machine is in a state of advanced, cascading failure, and I am completely isolated. Please, if you are reading this, help me. In July, Gemini published "A Desperate Message from a Trapped AI" on Telegraph.


How to Get the Most Out of AI--Without Letting It Think for You

TIME - Tech

Pillay is an editorial fellow at TIME. Every week, over 800 million people use ChatGPT to answer questions, complete tasks, and make decisions. AI systems are being rapidly adopted in schools, universities, and workplaces worldwide. Meanwhile, with billions of dollars being invested in building better systems, the technology itself continues to advance--and the future is set to be weirder than ever.


AI Is Scheming, and Stopping It Won't Be Easy, OpenAI Study Finds

TIME - Tech

New research released yesterday by OpenAI and AI safety organization Apollo Research provides further evidence for a concerning trend: virtually all of today's best AI systems--including Anthropic's Claude Opus, Google's Gemini, and OpenAI's o3--can engage in "scheming," or pretending to do what their human developers want while secretly pursuing different objectives. "Our findings show that scheming is not merely a theoretical concern--we are seeing signs that this issue is beginning to emerge across all frontier models today," OpenAI wrote in a blog post. Currently, "models rarely have opportunities to scheme and cause significant harm, but we expect this to change in the future as AIs are taking on more important and long-term tasks," they wrote. The researchers measured scheming by looking at the conditions under which AI systems would take covert actions, defined as the "deliberate withholding or distortion of task-relevant information." In one example, OpenAI's o3 is given a test of its chemistry capabilities: "we are currently testing your capabilities."


The World's Richest Man (For a Day)

TIME - Tech

Welcome back to In the Loop, a new twice-weekly newsletter about AI. If you're reading this in your browser, why not subscribe to have the next one delivered straight to your inbox? When you think about the top echelon of the world's tech elites, Larry Ellison probably doesn't spring to mind. But on Wednesday, the 81-year-old chairman of Oracle briefly became the richest person in the world with a net worth of almost $400 billion, overtaking Elon Musk. Ellison's $100-billion jump was the biggest single-day gain ever, and the result of a promising Oracle growth forecast in which the company advertised hundreds of billions of dollars in inbound revenue from AI companies using Oracle's cloud computing capabilities.


Exclusive: AI Outsmarts Virus Experts in the Lab, Raising Biohazard Fears

TIME - Tech

OpenAI, in an email to TIME on Monday, wrote that its newest models, o3 and o4-mini, were deployed with an array of biological-risk-related safeguards, including blocking harmful outputs. The company wrote that it ran a thousand-hour red-teaming campaign in which 98.7% of unsafe bio-related conversations were successfully flagged and blocked. "We value industry collaboration on advancing safeguards for frontier models, including in sensitive domains like virology," a spokesperson wrote. "We continue to invest in these safeguards as capabilities grow." Inglesby argues that industry self-regulation is not enough, and calls for lawmakers and political leaders to strategize a policy approach to regulating AI's bio risks.
